
    Do Social Bots Dream of Electric Sheep? A Categorisation of Social Media Bot Accounts

    So-called 'social bots' have garnered a lot of attention lately. Previous research showed that they attempted to influence political events such as the Brexit referendum and the US presidential elections. It remains, however, somewhat unclear what exactly can be understood by the term 'social bot'. This paper addresses the need to better understand the intentions of bots on social media and to develop a shared understanding of how 'social' bots differ from other types of bots. We thus describe a systematic review of publications that researched bot accounts on social media. Based on the results of this literature review, we propose a scheme for categorising bot accounts on social media sites. Our scheme groups bot accounts by two dimensions: Imitation of human behaviour and Intent. Comment: Accepted for publication in the Proceedings of the Australasian Conference on Information Systems, 201
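
    The abstract names only the scheme's two dimensions (Imitation of human behaviour and Intent). The sketch below is a hypothetical rendering of such a two-dimensional categorisation in Python; the concrete category values are illustrative assumptions, not the categories proposed in the paper.

```python
# Hypothetical sketch of a two-dimensional bot categorisation: the dimension
# names (Imitation, Intent) follow the abstract, while the concrete values
# are illustrative assumptions rather than the paper's actual categories.
from dataclasses import dataclass
from enum import Enum

class Imitation(Enum):
    LOW = "little or no imitation of human behaviour"
    HIGH = "close imitation of human behaviour"

class Intent(Enum):
    BENIGN = "benign"
    MALICIOUS = "malicious"

@dataclass
class BotAccount:
    handle: str
    imitation: Imitation
    intent: Intent

# Example: an account that closely imitates humans and spreads political spam.
account = BotAccount("@example_bot", Imitation.HIGH, Intent.MALICIOUS)
print(account)
```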

    The Impact of Social Media on Social Cohesion: A Double‐Edged Sword

    Social media plays a major role in public communication in many countries and therefore has a large impact on societies and their cohesion. This thematic issue explores the impact social media has on social cohesion at a local or national level. The nine articles in this issue focus on both the potential of social media usage to foster social cohesion and the possible drawbacks of social media that could negatively influence the development and maintenance of social cohesion. In the articles, social cohesion is examined from different perspectives, with and without the background of a crisis, and on various social media platforms. The picture that emerges is that of social media as, to borrow a phrase used in one of the articles, a double-edged sword.

    Rumour Detection in the Wild: A Browser Extension for Twitter

    Rumour detection, particularly on social media, has gained popularity in recent years. The machine learning community has made significant contributions to investigating automatic methods for detecting rumours on such platforms. However, these state-of-the-art (SoTA) models are often deployed by social media companies; ordinary end-users cannot leverage the solutions in the literature for their own rumour detection. To address this issue, we put forward a novel browser extension that allows these users to perform rumour detection on Twitter. In particular, we leverage the performance of SoTA architectures, which has not been done previously. Initial results from a user study confirm that this browser extension is beneficial to end-users. Additionally, we examine the performance of our browser extension's rumour detection model in a simulated deployment environment. Our results show that additional infrastructure for the browser extension is required to ensure its usability when deployed as a live service for Twitter users at scale.
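
    As a rough illustration of the deployment set-up discussed above, here is a minimal sketch of a backend service that a browser extension could call with tweet text. The /detect route and the stubbed score_rumour function are assumptions for illustration, not the paper's actual implementation; a real deployment would plug the trained SoTA rumour classifier into the stub.

```python
# Minimal sketch of a rumour-detection backend for a browser extension.
# The /detect route and score_rumour stub are illustrative assumptions;
# a real deployment would call the trained SoTA rumour classifier here.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Tweet(BaseModel):
    text: str

def score_rumour(text: str) -> float:
    """Stand-in for the actual model: returns an estimated P(rumour)."""
    return 0.5  # dummy value for illustration

@app.post("/detect")
def detect(tweet: Tweet) -> dict:
    p = score_rumour(tweet.text)
    return {"rumour": p >= 0.5, "score": p}

# Run with: uvicorn service:app  (the extension would POST {"text": "..."}).
```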

    Temporal Generalizability in Multimodal Misinformation Detection

    Misinformation detection models degrade in performance over time, but the precise causes of this remain under-researched, in particular for multimodal models. We present experiments investigating the impact of temporal shift on the performance of multimodal automatic misinformation detection classifiers. Working with the r/Fakeddit dataset, we found that evaluating models on temporally out-of-domain data (i.e., data from time periods unseen during training) results in a non-linear, 7-8% drop in macro F1 as compared to traditional evaluation strategies (which do not control for the effect of content change over time). Focusing on two factors that make temporal generalizability in misinformation detection difficult, content shift and class distribution shift, we found that content shift has a stronger effect on recall. Within the context of coarse-grained vs. fine-grained misinformation detection with r/Fakeddit, we find that certain misinformation classes seem to be more stable with respect to content shift (e.g., Manipulated and Misleading Content). Our results indicate that future research efforts need to explicitly account for the temporal nature of misinformation to ensure that experiments reflect expected real-world performance.
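
    To make the evaluation set-up concrete, the sketch below contrasts a random split with a temporally out-of-domain split and scores each with macro F1. The file and column names (fakeddit_sample.csv with text, label, created_utc) are assumptions, and the classifier is a simple text-only stand-in for the multimodal models studied in the paper.

```python
# Illustrative comparison of random vs. temporally out-of-domain evaluation.
# File name and columns (text, label, created_utc) are assumptions; the
# classifier is a text-only stand-in for the paper's multimodal models.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def macro_f1(train: pd.DataFrame, test: pd.DataFrame) -> float:
    vec = TfidfVectorizer(max_features=20_000)
    X_tr, X_te = vec.fit_transform(train["text"]), vec.transform(test["text"])
    clf = LogisticRegression(max_iter=1000).fit(X_tr, train["label"])
    return f1_score(test["label"], clf.predict(X_te), average="macro")

df = pd.read_csv("fakeddit_sample.csv")

# Random split mixes all time periods into both train and test sets.
rand_train, rand_test = train_test_split(df, test_size=0.2, random_state=0)

# Temporal split trains on older posts and tests on newer, unseen time periods.
df = df.sort_values("created_utc")
cut = int(0.8 * len(df))
temp_train, temp_test = df.iloc[:cut], df.iloc[cut:]

print("random split macro F1:  ", macro_f1(rand_train, rand_test))
print("temporal split macro F1:", macro_f1(temp_train, temp_test))
```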

    Biases in Large Language Models: Origins, Inventory and Discussion
